
    Human capital risk in life-cycle economies

    I study the effect of market incompleteness on the aggregate economy in a model where agents face idiosyncratic, uninsurable human capital investment risk. The environment is a general equilibrium life-cycle model with a version of a Ben-Porath (1967) human capital accumulation technology, modified to incorporate risk. A CARA-normal specification keeps endogenous decisions independent of individual shock realizations. I study stationary equilibria of calibrated cases in which idiosyncratic uninsurable risk arises from specialization risk and from career risk. Under specialization risk, both the mean and the variance of the return from training increase with the endogenous decision to invest in human capital; under career risk, only the mean return does. With career risk only, stationary equilibria resemble those studied by Aiyagari (1994), and the impact of uninsurable idiosyncratic risk is relatively small. With a significant amount of specialization risk, however, stationary equilibria are severely distorted relative to a complete-markets benchmark: human capital, for example, is only about 57 percent as large as its complete-markets counterpart. The two types of risk thus have very different, and quantitatively significant, general equilibrium implications.

    Keywords: human capital risk, life-cycle, incomplete markets.
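    The abstract does not spell out functional forms, but the distinction it draws can be written down in a minimal illustrative specification. The parameters $a$, $\phi$, $\gamma$, $\sigma$ and the placement of the shock below are assumptions of this sketch, not the paper's calibration:

```latex
% CARA preferences keep decisions independent of individual shock realizations:
u(c_t) = -\tfrac{1}{\gamma}\, e^{-\gamma c_t}

% Ben-Porath accumulation with time share s_t invested in training.
% Specialization risk: the shock is scaled by the investment decision,
% so both the mean and the variance of the return rise with s_t:
h_{t+1} = h_t + a\,(s_t h_t)^{\phi} + s_t h_t\, \eta_{t+1},
  \qquad \eta_{t+1} \sim N(0, \sigma^2)

% Career risk: the shock is additive, so only the mean return rises with s_t:
h_{t+1} = h_t + a\,(s_t h_t)^{\phi} + \eta_{t+1},
  \qquad \eta_{t+1} \sim N(0, \sigma^2)
```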

    On the Power of Adaptivity in Matrix Completion and Approximation

    We consider the related tasks of matrix completion and matrix approximation from missing data and propose adaptive sampling procedures for both problems. We show that adaptive sampling allows one to eliminate standard incoherence assumptions on the matrix row space that are necessary for passive sampling procedures. For exact recovery of a low-rank matrix, our algorithm judiciously selects a few columns to observe in full and, with few additional measurements, projects the remaining columns onto their span. This algorithm exactly recovers an $n \times n$ rank-$r$ matrix using $O(nr\mu_0 \log^2(r))$ observations, where $\mu_0$ is a coherence parameter on the column space of the matrix. In addition to completely eliminating any row space assumptions that have pervaded the literature, this algorithm enjoys a better sample complexity than any existing matrix completion algorithm. To certify that this improvement is due to adaptive sampling, we establish that row space coherence is necessary for passive sampling algorithms to achieve non-trivial sample complexity bounds. For constructing a low-rank approximation to a high-rank input matrix, we propose a simple algorithm that thresholds the singular values of a zero-filled version of the input matrix. The algorithm computes an approximation that is nearly as good as the best rank-$r$ approximation using $O(nr\mu \log^2(n))$ samples, where $\mu$ is a slightly different coherence parameter on the matrix columns. Again we eliminate assumptions on the row space.
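    The column-selection idea in the abstract can be sketched in a few lines of numpy. This is an illustrative reading of the procedure, not the paper's implementation: the oracle interfaces (`observe_entries`, `observe_column`), the probe count, and the residual threshold are assumptions here, and the sample sizes needed for the stated guarantee are derived in the paper.

```python
import numpy as np

def adaptive_complete(observe_entries, observe_column, n, m, num_probes, tol=1e-8):
    """Sketch of adaptive column-space matrix completion.

    observe_entries(j, rows): sampled entries of column j at index set `rows`
    observe_column(j):        the full j-th column (the expensive query)
    """
    U = np.zeros((n, 0))                  # orthonormal basis of the recovered span
    M_hat = np.zeros((n, m))
    for j in range(m):
        rows = np.random.choice(n, size=num_probes, replace=False)
        x = observe_entries(j, rows)      # cheap partial look at column j
        if U.shape[1] == 0:
            coef, residual = np.zeros(0), x
        else:
            coef, *_ = np.linalg.lstsq(U[rows, :], x, rcond=None)
            residual = x - U[rows, :] @ coef
        if np.linalg.norm(residual) > tol * max(np.linalg.norm(x), 1.0):
            col = observe_column(j)       # new direction: read the column in full
            M_hat[:, j] = col
            r = col - U @ (U.T @ col)     # Gram-Schmidt step to extend the basis
            if np.linalg.norm(r) > tol:
                U = np.column_stack([U, r / np.linalg.norm(r)])
        else:
            M_hat[:, j] = U @ coef        # in the span: reconstruct from probes alone
    return M_hat
```

    A column whose probed entries already lie in the span of the recovered basis costs only `num_probes` observations; only columns that contribute a new direction are read in full, which is where the $O(nr\mu_0 \log^2(r))$ budget is spent.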

    Noise-adaptive Margin-based Active Learning and Lower Bounds under Tsybakov Noise Condition

    We present a simple noise-robust margin-based active learning algorithm that finds homogeneous (passing through the origin) linear separators, and we analyze its error convergence when labels are corrupted by noise. We show that when the noise satisfies the Tsybakov low-noise condition (Mammen and Tsybakov 1999; Tsybakov 2004), the algorithm adapts to the unknown noise level and achieves the optimal statistical rate up to poly-logarithmic factors. We also derive lower bounds for margin-based active learning algorithms under the Tsybakov noise condition (TNC) in the membership query synthesis scenario (Angluin 1988). Our result implies lower bounds for the stream-based selective sampling scenario (Cohn 1990) under TNC for some fairly simple data distributions. Quite surprisingly, we show that the sample complexity cannot be improved even if the underlying data distribution is as simple as the uniform distribution on the unit ball. Our proof involves the construction of a well-separated hypothesis set on the d-dimensional unit ball along with carefully designed label distributions for the Tsybakov noise condition. Our analysis might provide insights for other forms of lower bounds as well.

    Comment: 16 pages, 2 figures. An abridged version to appear in the Thirtieth AAAI Conference on Artificial Intelligence (AAAI), held in Phoenix, AZ, USA in 2016.
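    As a rough illustration of the margin-based mechanic the abstract analyzes (query labels only for points falling inside a shrinking band around the current separator), here is a hedged numpy sketch. The band schedule, batch size, and least-squares refit are placeholders; the paper's noise-adaptive choices of these quantities are what drive its rates.

```python
import numpy as np

def margin_based_active(X_pool, query_label, rounds=10, batch=50, b0=1.0, shrink=0.5):
    """Sketch of margin-based active learning for a homogeneous linear separator.

    X_pool: (N, d) unlabeled points, assumed on the unit ball.
    query_label(i): label oracle returning -1 or +1 for pool point i.
    """
    n, d = X_pool.shape
    w = np.zeros(d)
    labeled, labels = [], []
    b = b0
    for k in range(rounds):
        if k == 0:
            cand = np.arange(n)                          # seed round: no separator yet
        else:
            cand = np.where(np.abs(X_pool @ w) <= b)[0]  # points inside the margin band
            b *= shrink                                  # shrink the band each round
        if len(cand) == 0:
            break
        take = np.random.choice(cand, size=min(batch, len(cand)), replace=False)
        labeled.extend(take.tolist())
        labels.extend(query_label(i) for i in take)
        # refit on all labels so far; least squares stands in for the paper's ERM step
        w, *_ = np.linalg.lstsq(X_pool[labeled], np.asarray(labels, float), rcond=None)
        nrm = np.linalg.norm(w)
        if nrm > 0:
            w /= nrm                                     # homogeneous: direction only
    return w
```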

    Learning and the Great Moderation

    We study a stylized theory of the volatility reduction in the U.S. after 1984 (the Great Moderation) which attributes part of the stabilization to less volatile shocks and another part to more difficult inference on the part of Bayesian households attempting to learn the latent state of the economy. We use a standard equilibrium business cycle model with technology following an unobserved regime-switching process. After 1984, according to Kim and Nelson (1999a), the variance of U.S. macroeconomic aggregates declined because boom and recession regimes moved closer together, keeping conditional variance unchanged. In our model this makes the signal extraction problem more difficult for Bayesian households, and in response they moderate their behavior, reinforcing the effect of the less volatile stochastic technology and contributing an extra measure of moderation to the economy. We construct example economies in which this learning effect accounts for about 30 percent of a volatility reduction of the magnitude observed in the postwar U.S. data.

    Keywords: business cycles; regime-switching; Bayesian learning; information
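    The households' signal extraction problem can be illustrated with a standard two-regime Bayesian filter. This is a minimal sketch, not the paper's equilibrium model; the regime means, innovation variance, and transition matrix are placeholder inputs:

```python
import numpy as np
from scipy.stats import norm

def regime_filter(growth, mu, sigma, P, prior=(0.5, 0.5)):
    """Filtered beliefs over a 2-state regime-switching mean (Hamilton-style filter).

    growth: observed technology growth series
    mu:     (boom mean, recession mean)
    sigma:  innovation standard deviation
    P:      2x2 transition matrix, P[i, j] = Pr(S_t = j | S_{t-1} = i)
    """
    p = np.asarray(prior, dtype=float)
    beliefs = []
    for y in growth:
        p = p @ P                                        # predict: propagate beliefs
        p = p * norm.pdf(y, loc=np.asarray(mu), scale=sigma)
        p = p / p.sum()                                  # update: Bayes' rule
        beliefs.append(p.copy())
    return np.array(beliefs)                             # filtered Pr(regime | data)
```

    When the boom and recession means move closer together, as Kim and Nelson (1999a) estimate for the post-1984 data, the two likelihoods overlap more and the filtered probabilities hover nearer one half: exactly the harder inference problem that leads the model's households to moderate their behavior.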

    Detecting Activations over Graphs using Spanning Tree Wavelet Bases

    We consider the detection of activations over graphs under Gaussian noise, where signals are piecewise constant over the graph. Despite the wide applicability of such a detection algorithm, there has been little success in the development of computationally feasible methods with provable theoretical guarantees for general graph topologies. We cast this as a hypothesis testing problem and first provide a universal necessary condition for asymptotic distinguishability of the null and alternative hypotheses. We then introduce the spanning tree wavelet basis over graphs, a localized basis that reflects the topology of the graph, and prove that for any spanning tree, this approach can distinguish null from alternative in a low signal-to-noise regime. Lastly, we improve on this result and show that using the uniform spanning tree in the basis construction yields a randomized test with stronger theoretical guarantees that in many cases match our necessary conditions. Specifically, we obtain near-optimal performance in edge-transitive graphs, $k$-nearest neighbor graphs, and $\epsilon$-graphs.
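    The paper's basis construction is its own; as a rough stand-in for the idea of a localized wavelet basis built on a spanning tree, the following sketch recursively removes the edge that most evenly bisects the current component and emits a two-sided Haar vector. The balanced-cut rule and normalization are assumptions of this sketch, not the paper's method.

```python
import numpy as np
from collections import defaultdict

def tree_haar_basis(n, tree_edges):
    """Haar-style localized orthonormal basis on a spanning tree (illustrative).

    n: number of vertices, labeled 0..n-1
    tree_edges: list of (u, v) pairs forming a spanning tree
    """
    adj = defaultdict(set)
    for u, v in tree_edges:
        adj[u].add(v)
        adj[v].add(u)

    basis = [np.full(n, 1.0 / np.sqrt(n))]               # constant (scaling) vector

    def side_of(start, nodes, cut):
        """Vertices of `nodes` reachable from `start` without crossing edge `cut`."""
        seen, stack = {start}, [start]
        while stack:
            u = stack.pop()
            for w in adj[u]:
                if w in nodes and w not in seen and {u, w} != cut:
                    seen.add(w)
                    stack.append(w)
        return seen

    def split(nodes):
        if len(nodes) < 2:
            return
        best = None
        for u in nodes:                                  # every tree edge inside nodes
            for w in adj[u]:
                if w in nodes and u < w:
                    left = side_of(u, nodes, {u, w})
                    bal = abs(len(left) - len(nodes) / 2)
                    if best is None or bal < best[0]:
                        best = (bal, left)
        left = best[1]
        right = nodes - left
        a, b = len(left), len(right)
        vec = np.zeros(n)
        vec[list(left)] = np.sqrt(b / (a * (a + b)))     # unit norm and orthogonal
        vec[list(right)] = -np.sqrt(a / (b * (a + b)))   # to the constant vector
        basis.append(vec)
        split(left)
        split(right)

    split(set(range(n)))
    return np.array(basis)                               # n orthonormal basis vectors
```

    A detection statistic can then be formed from the wavelet coefficients of the observed signal; per the abstract, drawing the tree as a uniform spanning tree rather than fixing it is what yields the stronger randomized guarantee.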

    Inventory Mistakes and the Great Moderation

    Why did the volatility of U.S. real GDP decline by more than the volatility of final sales with the Great Moderation in the mid-1980s? One possible explanation is that firms shifted their inventory behaviour towards a greater emphasis on production smoothing. We investigate the role of inventories in the Great Moderation by estimating an unobserved components model that identifies inventory and sales shocks and their propagation in the aggregate data. Our findings suggest little evidence of increased production smoothing. Instead, a reduction in inventory mistakes explains the excess volatility reduction in output relative to sales. Inventory mistakes are informational errors related to production that must be set in advance, and their reduction also helps to explain the changed forecasting role of inventories since the mid-1980s. Our findings provide an optimistic prognosis for the continuation of the Great Moderation despite the dramatic movements in output during the recent economic crisis.

    Keywords: inventories; unobserved components model; inventory mistakes; production smoothing; Great Moderation
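    The mechanism can be written down in a minimal accounting sketch; the notation below is illustrative and is not the estimated unobserved components specification:

```latex
% Accounting identity: output equals final sales plus inventory investment
y_t = s_t + \Delta i_t

% Production is set one period in advance, based on forecasted sales
y_t = \mathrm{E}_{t-1}[s_t] + \Delta i_t^{\text{planned}}

% The "inventory mistake" is the sales forecast error, absorbed as
% unintended inventory investment:
m_t = s_t - \mathrm{E}_{t-1}[s_t], \qquad
\Delta i_t = \Delta i_t^{\text{planned}} - m_t
```

    A fall in the variance of $m_t$ lowers the volatility of $y_t$ without any change in how sales shocks propagate, which is the abstract's reading of the excess volatility decline in output relative to sales.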